Coordinate Dual Averaging for Decentralized Online Optimization With Nonseparable Global Objectives
Authors
Abstract
Similar resources
Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions
In decentralized networks (of sensors, connected objects, etc.), there is an important need for efficient algorithms to optimize a global cost function, for instance to learn a global model from the local data collected by each computing unit. In this paper, we address the problem of decentralized minimization of pairwise functions of the data points, where these points are distributed over the...
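The entry above concerns decentralized minimization of a pairwise objective defined over data points held by different nodes. As a rough illustration only (not the authors' protocol), the toy sketch below pairs a data-swap gossip step with a local dual averaging update; the ring graph, the toy pairwise loss, and the 1/sqrt(t) step size are all assumptions.

```python
import numpy as np

# Toy illustration of gossip + dual averaging for a pairwise objective
#   F(theta) = (1/n^2) * sum_{i,j} loss(theta; x_i, x_j)
# The graph, loss, gossip rule and step sizes are placeholders, not the
# paper's exact method.

rng = np.random.default_rng(0)
n, d, T = 20, 5, 500
X = rng.normal(size=(n, d))                                    # one local data point per node
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # assumed ring graph

def pairwise_grad(theta, xi, xj):
    # gradient of a toy pairwise loss: 0.5 * (theta @ (xi - xj))**2
    r = theta @ (xi - xj)
    return r * (xi - xj)

z = np.zeros((n, d))      # accumulated (dual) gradients, one per node
aux = X.copy()            # auxiliary observations propagated by gossip
theta = np.zeros((n, d))  # local primal iterates

for t in range(1, T + 1):
    # gossip step: each node swaps its auxiliary observation with a random neighbor
    for i in rng.permutation(n):
        j = rng.choice(neighbors[i])
        aux[i], aux[j] = aux[j].copy(), aux[i].copy()
    # local dual averaging step on the pairwise gradient seen at node i
    for i in range(n):
        z[i] += pairwise_grad(theta[i], X[i], aux[i])
        theta[i] = -z[i] / (np.sqrt(t) + 1.0)   # prox step for psi(x)=0.5*||x||^2
```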
Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization
In this paper, we address the problem of distributed learning over a decentralized network, arising from scenarios including distributed sensors or geographically separated data centers. We propose a fully distributed algorithm called random walk distributed dual averaging (RW-DDA) that only requires local updates. Our RW-DDA method improves the existing distributed dual averaging (DDA) method...
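A minimal sketch of the random-walk idea described above: a single dual variable is carried by a token that hops between neighboring nodes, and only the currently visited node adds its local subgradient before passing the token on. The ring topology, least-squares losses, and step-size schedule below are illustrative assumptions, not the RW-DDA paper's exact specification.

```python
import numpy as np

# Sketch of a random-walk-style distributed dual averaging loop.
# Node i holds the local loss 0.5 * (A[i] @ x - b[i])**2.

rng = np.random.default_rng(1)
n, d, T = 10, 3, 1000
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # assumed ring topology

def local_subgrad(i, x):
    return (A[i] @ x - b[i]) * A[i]

z = np.zeros(d)   # dual variable carried by the random walk
x = np.zeros(d)
node = 0          # the walk starts at node 0

for t in range(1, T + 1):
    z += local_subgrad(node, x)          # only the visited node updates the dual variable
    x = -z / (np.sqrt(t) + 1.0)          # dual averaging primal step, psi(x)=0.5*||x||^2
    node = rng.choice(neighbors[node])   # token hops to a uniformly random neighbor
```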
Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization
In this paper, we address the problem of distributed learning over a large number of distributed sensors or geographically separated data centers, which suffer from sampling biases across nodes. We propose an algorithm called random walk distributed dual averaging (RW-DDA) method that only requires local updates and is fully distributed. Our RW-DDA method is robust to the change in network topo...
Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization
We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop extensions of Nesterov’s dual averaging method, that can exploit the regularization structure in an online setting...
Dual Averaging Method for Regularized Stochastic Learning and Online Optimization
We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularizatio...
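For the l1-regularized case described in the two entries above, the regularized dual averaging step has a simple closed form: average all subgradients seen so far, then soft-threshold. The sketch below is a toy instance with illustrative choices of lam, gamma, and a synthetic data stream; it follows the standard RDA step with h(x) = 0.5*||x||^2 and beta_t = gamma*sqrt(t), not any particular experiment from the paper.

```python
import numpy as np

# Toy sketch of an l1-regularized dual averaging (RDA-style) update.

rng = np.random.default_rng(2)
d, T = 50, 2000
lam, gamma = 0.1, 1.0          # l1 weight and step-size constant (assumed values)
w_true = np.zeros(d)
w_true[:5] = 1.0               # sparse ground truth for the synthetic stream

gbar = np.zeros(d)             # running average of subgradients
x = np.zeros(d)

for t in range(1, T + 1):
    a = rng.normal(size=d)                  # one streaming example (least-squares loss)
    y = a @ w_true + 0.01 * rng.normal()
    g = (a @ x - y) * a                     # subgradient of the loss at the current x
    gbar = ((t - 1) * gbar + g) / t         # average of all subgradients so far
    # closed-form step for Psi(x)=lam*||x||_1, h(x)=0.5*||x||^2, beta_t=gamma*sqrt(t)
    x = -(np.sqrt(t) / gamma) * np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
```

Because the update thresholds the averaged subgradient rather than the iterate, coordinates whose average gradient stays below lam are set exactly to zero, which is how this style of method produces genuinely sparse online iterates.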
Journal
Journal title: IEEE Transactions on Control of Network Systems
Year: 2018
ISSN: 2325-5870
DOI: 10.1109/tcns.2016.2573639